Temporal sentence grounding (TSG) aims to identify the temporal boundary of a specific segment in an untrimmed video given a sentence query. All existing works first adopt a sparse sampling strategy to extract a fixed number of video frames and then conduct multi-modal interactions with the query sentence for reasoning. However, we argue that these methods overlook two indispensable issues: 1) Boundary bias: the annotated target segment generally refers to two specific frames as the corresponding start and end timestamps. The video downsampling process may lose these two frames and take adjacent irrelevant frames as new boundaries. 2) Reasoning bias: such incorrect new boundary frames also lead to reasoning bias during frame-query interaction, reducing the generalization ability of the model. To alleviate the above limitations, in this paper we propose a novel Siamese Sampling and Reasoning Network (SSRN) for TSG, which introduces a siamese sampling mechanism to generate additional contextual frames that enrich and refine the new boundaries. Specifically, a reasoning strategy is developed to learn the inter-relationship among these frames and generate soft labels on the boundaries for more accurate frame-query reasoning. Such a mechanism can also supplement the sampled sparse frames with the absent consecutive visual semantics for fine-grained activity understanding. Extensive experiments demonstrate the effectiveness of SSRN on three challenging datasets.
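To make the boundary-bias issue concrete, here is a minimal sketch (not the authors' SSRN implementation) contrasting plain uniform sparse sampling with a siamese-style sampling that additionally draws the neighbours of each sampled index, so that an annotated boundary frame dropped by downsampling can still be covered; the function names and the one-frame neighbour offset are hypothetical.

```python
import numpy as np

def sparse_sample(num_frames: int, num_samples: int) -> np.ndarray:
    """Uniformly pick a fixed number of frame indices from the full video."""
    return np.linspace(0, num_frames - 1, num_samples).round().astype(int)

def siamese_sample(num_frames: int, num_samples: int, offset: int = 1) -> np.ndarray:
    """Keep each sparsely sampled index together with its neighbours so that a
    boundary frame dropped by downsampling can still be covered."""
    base = sparse_sample(num_frames, num_samples)
    neighbours = np.concatenate([base - offset, base, base + offset])
    return np.unique(np.clip(neighbours, 0, num_frames - 1))

# A 300-frame video downsampled to 32 frames: a boundary annotated at frame 97
# is missed by plain uniform sampling but covered by the neighbour-augmented set.
sparse = sparse_sample(300, 32)
augmented = siamese_sample(300, 32)
print(97 in sparse, 97 in augmented)  # False True
```

In this toy setting, the annotated boundary at frame 97 sits between two uniformly sampled indices and only reappears once neighbouring frames are added, which is the situation the abstract describes.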
Zero-shot learning has been a prominent research topic in both the vision and language communities. Recently, most existing methods adopt structured knowledge information to model explicit correlations among categories and use deep graph convolutional networks to propagate information between different categories. However, it is difficult to add new categories to an existing structured knowledge graph, and deep graph convolutional networks suffer from the over-smoothing problem. In this paper, we provide a new semantic-enhanced knowledge graph that contains both expert knowledge and semantic correlations among categories. Our semantic-enhanced knowledge graph can further enhance the correlations among categories and makes it easy to absorb new categories. To propagate information on the knowledge graph, we propose a novel Residual Graph Convolutional Network (ResGCN), which effectively alleviates the over-smoothing problem. Experiments conducted on the widely used large-scale ImageNet-21K and AWA2 datasets show the effectiveness of our method and establish a new state of the art on zero-shot learning. Moreover, our results on the large-scale ImageNet-21K with various feature extraction networks show that our method has better generalization and robustness.
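As an illustration of the residual propagation idea the abstract credits for alleviating over-smoothing, the following NumPy sketch adds the layer input back after each graph convolution; the symmetric normalization, ReLU activation, and toy dimensions are assumptions rather than the paper's exact design.

```python
import numpy as np

def normalize_adjacency(adj: np.ndarray) -> np.ndarray:
    """Symmetrically normalize A + I, the usual GCN propagation matrix."""
    adj_hat = adj + np.eye(adj.shape[0])
    deg_inv_sqrt = 1.0 / np.sqrt(adj_hat.sum(axis=1))
    return adj_hat * deg_inv_sqrt[:, None] * deg_inv_sqrt[None, :]

def res_gcn_layer(features: np.ndarray, adj_norm: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """One residual GCN layer: propagate, transform, apply ReLU, then add the
    input back so that stacking many layers does not wash out node identity."""
    propagated = adj_norm @ features @ weight
    return features + np.maximum(propagated, 0.0)

# Toy usage: 4 category nodes with 8-dim semantic features and an 8x8 weight,
# so the residual connection keeps input and output dimensions equal.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
x = rng.standard_normal((4, 8))
w = rng.standard_normal((8, 8)) * 0.1
out = res_gcn_layer(x, normalize_adjacency(adj), w)
print(out.shape)  # (4, 8)
```

Because each layer only adds a correction to its input, deep stacks keep the per-node semantics that plain GCN layers tend to smooth away.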
Generalist models, which are capable of performing diverse multi-modal tasks in a task-agnostic way within a single model, have been explored recently. Although a hopeful alternative path toward general-purpose AI, existing generalist models are still at an early stage, where modality and task coverage is limited. To empower multi-modal task-scaling and speed up this line of research, we release a generalist model learning system, OFASys, built on top of a declarative task interface named multi-modal instruction. At the core of OFASys is the idea of decoupling multi-modal task representations from the underlying model implementations. In OFASys, a task involving multiple modalities can be defined declaratively, even with just a single line of code. The system automatically generates task plans from such instructions for training and inference, and it also facilitates multi-task training on diverse multi-modal workloads. As a starting point, we provide presets of 7 different modalities and 23 highly diverse example tasks in OFASys, with which we also develop a first-of-its-kind single model, OFA+, that can handle text, image, speech, video, and motion data. The single OFA+ model achieves 95% of the performance of 15 task-finetuned models on average with only 16% of their parameters, showcasing the performance reliability of multi-modal task-scaling provided by OFASys. Available at https://github.com/OFA-Sys/OFASys
In this paper, we consider incorporating data associated with the sun's north and south polar field strengths to improve solar flare prediction performance using machine learning models. When used to supplement local data from active regions on the photospheric magnetic field of the sun, the polar field data provide global information to the predictor. While such global features have previously been proposed for predicting the intensity of the next solar cycle, in this paper we propose using them to help classify individual solar flares. We conduct experiments using HMI data and four different machine learning algorithms that can exploit polar field information. Additionally, we propose a novel probabilistic mixture-of-experts model that can simply and effectively incorporate polar field data and that provides prediction performance on par with state-of-the-art solar flare prediction algorithms such as the Recurrent Neural Network (RNN). Our experimental results indicate the usefulness of the polar field data for solar flare prediction, which can improve the Heidke Skill Score (HSS2) by as much as 10.1%.
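A minimal sketch of what a probabilistic mixture of experts combining the two feature sources could look like: each expert scores the local active-region features, while a gate driven by the global polar-field features decides how to mix them. The linear expert and gate forms, dimensions, and names are illustrative assumptions, not the paper's model.

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sigmoid(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-z))

def mixture_of_experts_flare_prob(local_feats, polar_feats, expert_weights, gate_weights):
    """Each expert maps local active-region features to a flare probability;
    a gate driven by the global polar-field features mixes the experts."""
    expert_probs = sigmoid(local_feats @ expert_weights)   # (n_samples, n_experts)
    gate = softmax(polar_feats @ gate_weights)              # (n_samples, n_experts)
    return (gate * expert_probs).sum(axis=1)                # (n_samples,)

# Toy example: 5 samples, 10 local magnetogram-derived features,
# 2 polar-field features, 3 experts.
rng = np.random.default_rng(1)
local = rng.standard_normal((5, 10))
polar = rng.standard_normal((5, 2))
p_flare = mixture_of_experts_flare_prob(
    local, polar, rng.standard_normal((10, 3)), rng.standard_normal((2, 3)))
print(p_flare.round(3))
```

The design point is that the global polar-field data only has to choose among experts, so it can shift the prediction regime without having to compete directly with the much richer local features.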
Airway segmentation is crucial for the examination, diagnosis, and prognosis of lung diseases, while its manual delineation is unduly burdensome. To alleviate this time-consuming and potentially subjective manual procedure, researchers have proposed methods to automatically segment airways from computed tomography (CT) images. However, some small-sized airway branches (e.g., bronchi and terminal bronchioles) significantly aggravate the difficulty of automatic segmentation by machine learning models. In particular, the variance of voxel values and the severe data imbalance in airway branches make computational modules prone to discontinuous and false-negative predictions. Attention mechanisms have demonstrated the capability to segment complex structures, while fuzzy logic can reduce the uncertainty in feature representations. Therefore, the integration of deep attention networks and fuzzy theory, realized by a fuzzy attention layer, should be an upgraded solution. This paper presents an efficient airway segmentation method comprising a novel fuzzy attention neural network and a comprehensive loss function to enhance the spatial continuity of airway segmentation. The deep fuzzy set is formulated by a set of voxels in the feature map and learnable Gaussian membership functions. Different from existing attention mechanisms, the proposed channel-specific fuzzy attention addresses the issue of heterogeneous features across different channels. Furthermore, a novel evaluation metric is proposed to assess both the continuity and completeness of airway structures. The efficiency of the proposed method has been demonstrated by testing on open datasets, including the EXACT'09 and LIDC datasets, as well as our in-house COVID-19 and fibrotic lung disease datasets.
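A rough sketch of the core idea of channel-wise fuzzy attention, assuming a Gaussian membership function with a learnable centre and width per channel that re-weights the feature map; the shapes and the way memberships are applied are assumptions, not the paper's exact layer.

```python
import numpy as np

def gaussian_membership(features: np.ndarray, mu: np.ndarray, sigma: np.ndarray) -> np.ndarray:
    """Map each voxel's feature value to a fuzzy membership in (0, 1] using a
    per-channel Gaussian membership function with learnable centre and width."""
    return np.exp(-0.5 * ((features - mu) / sigma) ** 2)

def fuzzy_channel_attention(features: np.ndarray, mu: np.ndarray, sigma: np.ndarray) -> np.ndarray:
    """Re-weight a (C, D, H, W) feature map by its fuzzy membership values so
    that voxels far from the learned channel prototypes are suppressed."""
    attention = gaussian_membership(features, mu[:, None, None, None], sigma[:, None, None, None])
    return features * attention

# Toy usage: 8 channels on a small 4x4x4 volume.
rng = np.random.default_rng(2)
feats = rng.standard_normal((8, 4, 4, 4))
mu = np.zeros(8)      # learnable in a real network
sigma = np.ones(8)    # learnable in a real network
out = fuzzy_channel_attention(feats, mu, sigma)
print(out.shape)  # (8, 4, 4, 4)
```

Because each channel gets its own membership centre and width, heterogeneous channels can be attended to differently, which is the property the abstract contrasts against standard attention.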
Existing top-performing 3D object detectors typically rely on multi-modal fusion strategies. However, this design is fundamentally restricted because it overlooks modality-specific useful information, which ultimately hampers model performance. To address this limitation, in this work we introduce a novel modality interaction strategy in which individual single-modality representations are learned and maintained throughout the pipeline, so that their unique characteristics can be exploited during object detection. To realize the proposed strategy, we design a deep interaction architecture characterized by a multi-modal representational interaction encoder and a multi-modal predictive interaction decoder. Experiments on the large-scale nuScenes dataset show that our proposed method consistently surpasses all prior arts. Crucially, our method ranks first on the highly competitive nuScenes object detection leaderboard.
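The sketch below illustrates the general flavour of representational interaction, where LiDAR and camera tokens exchange information through cross-attention while each modality keeps its own representation; the single-head attention, residual update, and token shapes are simplifying assumptions, not the paper's architecture.

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_attend(query_tokens: np.ndarray, key_value_tokens: np.ndarray, scale: float) -> np.ndarray:
    """Single-head cross-attention: queries from one modality attend to the other."""
    attn = softmax(query_tokens @ key_value_tokens.T * scale)
    return attn @ key_value_tokens

def interaction_encoder_step(lidar_tokens: np.ndarray, camera_tokens: np.ndarray):
    """One interaction step: each modality is refined with information from the
    other but keeps its own representation, rather than being fused into one."""
    scale = 1.0 / np.sqrt(lidar_tokens.shape[-1])
    lidar_refined = lidar_tokens + cross_attend(lidar_tokens, camera_tokens, scale)
    camera_refined = camera_tokens + cross_attend(camera_tokens, lidar_tokens, scale)
    return lidar_refined, camera_refined

# Toy usage: 6 LiDAR tokens and 10 camera tokens with 16-dim features.
rng = np.random.default_rng(3)
lidar, camera = rng.standard_normal((6, 16)), rng.standard_normal((10, 16))
lidar, camera = interaction_encoder_step(lidar, camera)
print(lidar.shape, camera.shape)  # (6, 16) (10, 16)
```

The contrast with fusion is that the outputs remain two separate streams, so modality-specific cues are still available to the downstream detection head.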
Real-world text applications often involve composing a wide range of text control operations, such as editing text with respect to an attribute, manipulating keywords and structure, and generating new text with desired attributes. Prior work typically learns/finetunes a language model (LM) to perform individual or specific subsets of operations. Recent research has studied combining operations in a plug-and-play manner, often with costly search or optimization in the complex sequence space. This paper proposes a new efficient approach for composable text operations in a compact text latent space. The low dimensionality and differentiability of the text latent vectors allow us to develop an efficient sampler based on ordinary differential equations (ODEs), given arbitrary plug-in operators (e.g., attribute classifiers). By connecting pretrained LMs (e.g., GPT-2) to the latent space through efficient adaptation, we then decode the sampled vectors into the desired text sequences. The flexible approach permits diverse control operators (sentiment, tense, formality, keywords, etc.) acquired with any relevant data from different domains. Experiments show that composing these operators within our approach can generate or edit high-quality text, substantially improving over previous methods in terms of generation quality and efficiency.
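A toy sketch of classifier-guided sampling in a latent space via an ODE: the latent vector is Euler-integrated along the gradient of a plug-in attribute classifier's log-probability. The linear "sentiment" classifier, step size, and step count are all illustrative assumptions; the paper's sampler and operators are more general.

```python
import numpy as np

def attribute_log_prob_grad(z: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Stand-in for the gradient of a plug-in attribute classifier's log-probability
    w.r.t. the latent; here a toy linear classifier, d/dz log sigmoid(z . w)."""
    logit = z @ direction
    return (1.0 - 1.0 / (1.0 + np.exp(-logit))) * direction

def ode_edit_latent(z0: np.ndarray, direction: np.ndarray, steps: int = 50, dt: float = 0.1) -> np.ndarray:
    """Euler-integrate dz/dt = grad log p(attribute | z) so the latent drifts
    toward the region the classifier marks as having the desired attribute."""
    z = z0.copy()
    for _ in range(steps):
        z = z + dt * attribute_log_prob_grad(z, direction)
    return z

# Toy usage: a 32-dim latent nudged toward a hypothetical "positive sentiment" direction;
# the score along that direction increases after editing.
rng = np.random.default_rng(4)
z_init = rng.standard_normal(32)
w_sentiment = rng.standard_normal(32) / np.sqrt(32)
z_edited = ode_edit_latent(z_init, w_sentiment)
print(float(z_init @ w_sentiment), float(z_edited @ w_sentiment))
```

Composing operators would amount to summing their gradients inside the integration loop; the edited latent is then handed to the adapted LM decoder to produce text.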
Facial expression is an essential factor in conveying human emotional states and intentions. Although remarkable progress has been made in the facial expression recognition (FER) task, challenges caused by the large variation of expression patterns and unavoidable data uncertainties remain. In this paper, we propose mid-level representation enhancement (MRE) and graph-embedded uncertainty suppressing (GUS) to address these issues. On one hand, MRE is introduced to prevent expression representation learning from being dominated by a limited number of highly discriminative patterns. On the other hand, GUS is introduced to suppress feature ambiguity in the representation space. The proposed method not only has stronger generalization capability for handling different variations of expression patterns, but also greater robustness in capturing expression representations. Experimental evaluation on Aff-Wild2 has verified the effectiveness of the proposed method.
Various structures in human physiology follow specific morphologies, often expressing complexity at very fine scales. Examples of such structures are the intrathoracic airways, retinal blood vessels, and hepatic blood vessels. Medical imaging modalities such as magnetic resonance imaging (MRI), computed tomography (CT), and optical coherence tomography (OCT) provide large collections of 2D and 3D images in which the spatial arrangement of these structures can be observed. Segmentation of these structures in medical imaging is of great importance, since analysis of the structures provides insights into disease diagnosis, treatment planning, and prognosis. Manually labeling extensive data by radiologists is often time-consuming and error-prone. As a result, automated or semi-automated computational models have become a popular research area in medical imaging over the past two decades, and many computational models have been developed to date. In this survey, we aim to provide a comprehensive review of the currently publicly available datasets, segmentation algorithms, and evaluation metrics. In addition, current challenges and future research directions are discussed.
Over the past two years, the turbulence caused by the arrival of COVID-19 has continued to bring new challenges. During the COVID-19 pandemic, rapid identification of infected patients and specific delineation of the infected regions in computed tomography (CT) images are needed. Although deep supervised learning methods have been established quickly, the scarcity of both image-level and pixel-level labels as well as the lack of explainable transparency still hinder the applicability of AI. Can we identify infected patients and delineate the infections with extremely limited supervision? Semi-supervised learning has demonstrated promising performance under limited labeled data and sufficient unlabeled data. Inspired by semi-supervised learning, we propose a model-agnostic calibrated pseudo-labeling strategy and apply it under a consistency regularization framework to generate explainable identification and delineation results. We demonstrate the effectiveness of our model with the combination of limited labeled data and sufficient unlabeled or weakly labeled data. Extensive experiments show that our model can efficiently utilize limited labeled data and provide explainable classification and segmentation results for decision-making in clinical routine. The code is available at https://github.com/ayanglab/xai covid-11.
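A minimal sketch of confidence-thresholded pseudo-labeling with a simple temperature calibration, the kind of building block a calibrated pseudo-labeling strategy relies on; the temperature value, threshold, and two-class CT example are assumptions, not the paper's exact procedure.

```python
import numpy as np

def temperature_calibrate(logits: np.ndarray, temperature: float) -> np.ndarray:
    """Soften (or sharpen) model logits before turning them into pseudo-label probabilities."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def select_pseudo_labels(logits: np.ndarray, temperature: float = 1.5, threshold: float = 0.9):
    """Keep only unlabeled samples whose calibrated confidence exceeds a threshold;
    the retained hard labels can then supervise a consistency-regularized model."""
    probs = temperature_calibrate(logits, temperature)
    confidence = probs.max(axis=-1)
    keep = confidence >= threshold
    return np.argmax(probs, axis=-1)[keep], keep

# Toy usage: logits for 4 unlabeled CT slices over 2 classes (non-infected / infected).
logits = np.array([[4.0, 0.2], [0.1, 0.3], [0.5, 5.5], [1.0, 1.1]])
labels, mask = select_pseudo_labels(logits)
print(labels, mask)  # [0 1] [ True False  True False]
```

Calibrating before thresholding keeps over-confident predictions from flooding the pseudo-label set, which matters when the labeled data are as scarce as in the setting described above.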